Error-triggered Three-Factor Learning Dynamics for Crossbar Arrays
Recent breakthroughs suggest that local, approximate gradient descent
learning is compatible with Spiking Neural Networks (SNNs). Although SNNs can
be scalably implemented using neuromorphic VLSI, an architecture that can learn
in situ as accurately as conventional processors is still missing. Here, we
propose a subthreshold circuit architecture, designed using insights from
machine learning and computational neuroscience, that could achieve such
accuracy. Using a surrogate gradient learning framework, we derive local,
error-triggered learning dynamics compatible with crossbar arrays and the
temporal dynamics of SNNs. The derivation reveals that circuits used for
inference and training dynamics can be shared, which simplifies the circuit and
suppresses the effects of fabrication mismatch. We present SPICE simulations in
an XFAB 180 nm process, as well as large-scale simulations of spiking neural
networks on event-based benchmarks, including a gesture recognition task. Our
results show that the number of updates can be reduced a hundred-fold compared
to the standard rule while achieving performance on par with the state of the
art.
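The error-triggered idea can be illustrated with a minimal sketch: a three-factor update that multiplies a presynaptic trace, a surrogate gradient of the membrane potential, and a per-neuron error, and that applies the step only when the error magnitude crosses a trigger threshold. All names, the fast-sigmoid surrogate, and the threshold value below are illustrative assumptions, not the paper's circuit-level rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate_grad(v, beta=1.0):
    # Derivative of a fast-sigmoid surrogate for the spike nonlinearity
    return 1.0 / (beta * np.abs(v) + 1.0) ** 2

def error_triggered_update(w, pre_trace, v_mem, error, lr=0.01, theta=0.1):
    """Three-factor update: take the local gradient step only for neurons
    whose error magnitude exceeds the trigger threshold theta."""
    trigger = np.abs(error) > theta            # error-triggered gating
    post = error * surrogate_grad(v_mem)       # postsynaptic factor
    dw = lr * np.outer(post * trigger, pre_trace)
    return w - dw, int(trigger.sum())

w = rng.normal(size=(4, 8))
pre = rng.random(8)           # presynaptic eligibility traces
v = rng.normal(size=4)        # membrane potentials
err = np.array([0.5, 0.01, -0.3, 0.02])
w_new, n_updates = error_triggered_update(w, pre, v, err)
```

With the toy error vector above, only two of the four neurons cross the threshold, so half the rows of the weight matrix are left untouched; this gating is what reduces the update count.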
Thermal Heating in ReRAM Crossbar Arrays: Challenges and Solutions
The increasing popularity of deep-learning-powered applications raises the
issue of the vulnerability of neural networks to adversarial attacks: barely
perceptible changes in the input data lead to erroneous outputs, hindering the
use of neural networks in applications that involve security-critical
decisions. A number of previous works have thoroughly evaluated the most
commonly used configuration, Convolutional Neural Networks (CNNs), against
different types of adversarial attacks. Moreover, recent works have
demonstrated the transferability of some adversarial examples across different
neural network models. This paper studies the robustness of newly emerging
models such as SpinalNet-based neural networks and Compact Convolutional
Transformers (CCT) on the CIFAR-10 image classification problem. Each
architecture was tested against four white-box attacks and three black-box
attacks. Unlike the VGG and SpinalNet models, the attention-based CCT
configuration demonstrated a large span between strong robustness and
vulnerability to adversarial examples. Finally, a study of transferability
between the VGG, VGG-inspired SpinalNet, and pretrained CCT 7/3x1 models was
conducted. It showed that high effectiveness of an attack on a particular
individual model does not guarantee transferability to other models.
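As a concrete illustration of the kind of white-box attack used in such robustness studies, here is a minimal Fast Gradient Sign Method (FGSM) sketch on a toy logistic classifier. The model, weights, and epsilon are illustrative assumptions; the paper's experiments use full image classifiers, not this toy.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, grad_x, eps=0.1):
    # Perturb the input in the direction that increases the loss,
    # bounded by eps per element, and keep it in the valid [0, 1] range
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

rng = np.random.default_rng(1)
w = rng.normal(size=16)       # toy linear classifier weights
x = rng.random(16)            # toy input in [0, 1]
y = 1.0                       # true label
p = sigmoid(w @ x)            # model confidence in the true label
grad_x = (p - y) * w          # input gradient of binary cross-entropy
x_adv = fgsm(x, grad_x)
# The perturbed input lowers the model's confidence in the true label
```

Black-box attacks differ only in that the gradient is not available and must be estimated or replaced by queries; transferability studies then apply `x_adv` crafted on one model to another.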
AudioFool: Fast, Universal and Synchronization-Free Cross-Domain Attack on Speech Recognition
Automatic Speech Recognition (ASR) systems have been shown to be vulnerable to
adversarial attacks that manipulate the command executed on the device. Recent
research has focused on methods for creating such attacks; however, some
issues relating to Over-The-Air (OTA) attacks have not been properly addressed.
In this work, we examine the properties required of robust attacks compatible
with the OTA model, and we design a method for generating attacks with
arbitrary combinations of such desired properties, namely invariance to
synchronization and robustness to filtering; this enables a Denial-of-Service
(DoS) attack against ASR systems. We achieve these characteristics by
constructing attacks in a modified frequency domain through an inverse Fourier
transform. We evaluate our method on standard keyword classification tasks,
analyze it in the OTA setting, and study the properties of the cross-domain
attacks to explain the efficiency of the approach.
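The core construction, shaping a signal in the frequency domain and mapping it to the time domain with an inverse Fourier transform, can be sketched as follows. Concentrating spectral energy inside a band that common channel filters pass makes the waveform robust to filtering, and random phases make the construction tolerant to time shifts in distribution. The band limits, sample rate, and function names are illustrative assumptions, not the paper's optimization procedure.

```python
import numpy as np

def freq_domain_signal(n_samples, sr=16000, band=(300, 3400), seed=0):
    """Build a time-domain waveform from a hand-designed spectrum via the
    inverse real FFT, with all energy inside the given frequency band."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / sr)
    spectrum = np.zeros(freqs.size, dtype=complex)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    # Unit magnitudes with random phases inside the passband
    spectrum[in_band] = np.exp(1j * rng.uniform(0, 2 * np.pi, in_band.sum()))
    x = np.fft.irfft(spectrum, n=n_samples)
    return x / np.max(np.abs(x))   # normalize peak amplitude

pert = freq_domain_signal(16000)   # one second of band-limited signal
```

An actual attack would optimize the in-band spectrum against the ASR model's loss rather than drawing random phases, but the inverse-FFT mapping and the band constraint are the same.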